{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "fwukZZnNTYWE" }, "source": [ "
\n", "\n", "# Introduction to Natural Language Processing and Python\n", "\n", "Copyright, NLP from scratch, 2024.\n", "\n", "[NLPfor.me](https://www.nlpfor.me)\n", "\n", "------------" ] }, { "cell_type": "markdown", "metadata": { "id": "uuznFjxWVV0n" }, "source": [ "## Introduction 🎬\n", "In this notebook, we will cover an introduction of natural language processing, and many of the fundamentals of getting started with working with text data in python.\n", "\n", "If you are unfamiliar with working with Jupyter, please follow the [directions for setting up a local python environment and working with Jupyter](./assets/working_with_jupyer.pdf) and you may then download the notebook as a `.ipynb` and run in either Jupyter or Jupyterlab.\n", "\n", "This notebook covers the following topics:\n", "- Python Fundamentals and Working with Strings in Python\n", "- Regular Expressions and the [`re`](https://docs.python.org/3/library/re.html) module\n", "- The [`pandas`](https://pandas.pydata.org/) library, Dataframes and string data in Pandas" ] }, { "cell_type": "markdown", "metadata": { "id": "zvkULTtkc9Gn" }, "source": [ "## Python Fundamentals and working with Strings 🐍🧢\n", "\n", "\n", "\n", "Python is a great language to learn as it is easy to pick up even for the non-technical beginner with no prior programming experience. This is largely due to its simple syntax and structure. For natural language processing, working with text data is easy to do in modules included in base python, such as the `string` and `re` (regular expressions) modules, which we will introduce and work with here, but not cover exhaustively. There is also extensive text processing capabilities built into the [pandas](https://pandas.pydata.org/docs/user_guide/text.html) data science library which will be the focus of the last section." ] }, { "cell_type": "markdown", "metadata": { "id": "S-djydJcc9Gn" }, "source": [ "### Variables and Strings" ] }, { "cell_type": "markdown", "metadata": { "id": "SA74RxXac9Gn" }, "source": [ "Like all programming languages, python has different [variables](https://en.wikipedia.org/wiki/Variable_(computer_science)) which can hold values of different data types. Like other programming languages, python has *primitive data types*, the fundamental data storage structures of the language. Unlike lower-level languages like C, even primitive data types are stored as *objects*, which means they have associated functions built in (or more formally, *methods*) as we will see with string variables shortly." ] }, { "cell_type": "markdown", "metadata": { "id": "CQTxXriNc9Gn" }, "source": [ "We can define any variable in python using the equals operator:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hBgvUH9mc9Gn" }, "outputs": [], "source": [ "x = 11.3" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DFbdpCLic9Go", "outputId": "9de66e57-395d-407a-bfdc-c7b7c7d5ee52" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "11.3\n" ] } ], "source": [ "print(x)" ] }, { "cell_type": "markdown", "metadata": { "id": "R5Rr8QN1c9Go" }, "source": [ "If we want to know the type of any object, we can either call the `type` function which is built in to base python, or alternatively, every object in python also has a `.__class__` attribute." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uayFroL6c9Go", "outputId": "3223d4c1-856f-4b90-89fe-301c8be12026" }, "outputs": [ { "data": { "text/plain": [ "float" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-EvYICT_c9Gp", "outputId": "d83cd1c9-4fa2-4537-b4a4-fcc78823b66c" }, "outputs": [ { "data": { "text/plain": [ "float" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.__class__" ] }, { "cell_type": "markdown", "metadata": { "id": "QE9GRpn_c9Gp" }, "source": [ "Now that we have a basic understanding of variables in python, since this is training for natural language processing, we should get working with text data πŸ™‚ Python stores text as 'strings' - so called because they strings of single characters. Let's first define a string in Python:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zJh0ThKZc9Gp", "outputId": "af4500c0-781d-4e3c-94c9-45566dff9369" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is a string about applesauce.\n" ] } ], "source": [ "my_string = \"This is a string about applesauce.\"\n", "print(my_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "AmlUJho-c9Gp" }, "source": [ "In Jupyter, we can render text in the notebook using [Markdown](https://en.wikipedia.org/wiki/Markdown):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YF8WarGWc9Gp", "outputId": "76a98cfb-bac7-4d8e-b2c4-6c393e718daf" }, "outputs": [ { "data": { "text/markdown": [ "This is a string about applesauce." ], "text/plain": [ "" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.display import Markdown\n", "\n", "Markdown(my_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "1UhAy5Y4c9Gp" }, "source": [ "There, that looks a little better. Long strings can be defined in Python using the triple quote (`\"\"\"`) to open and close the string, and can span multiple lines:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NFEaHkgqc9Gp" }, "outputs": [], "source": [ "my_long_string = \"\"\"This is my string of text data I'd like to work with. It contains letters, numbers such as 1234, punctuation \\\n", "such as commas, semicolons; other weird punctuation such as hashtags #, and also special characters such as \\\\n, \\\\r, and \\\\t, \\\n", "which represent linebreaks and tabs. I feel I should also mention applesauce.\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "f8SDWFM8c9Gp", "outputId": "5daf82fd-cf96-4707-f4ba-aeb57d7a4c05" }, "outputs": [ { "data": { "text/markdown": [ "This is my string of text data I'd like to work with. It contains letters, numbers such as 1234, punctuation such as commas, semicolons; other weird punctuation such as hashtags #, and also special characters such as \\n, \\r, and \\t, which represent linebreaks and tabs. I feel I should also mention applesauce." ], "text/plain": [ "" ] }, "execution_count": 269, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Markdown(my_long_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "-fwDIxkBc9Gq" }, "source": [ "A handy tool to know in python is using formatted strings, or [f-strings](https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals) as they are known. 
These allow you to format values stored in variables into text variables conveniently for display, or for programatically generating output. F-strings are created by prefixing the quotes with an `f`, and variables to be included are placed between curly French braces (`{`,`}`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7i7l-pcYc9Gq", "outputId": "2d069065-c463-4b6a-be31-ba65a81bbd46" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This a formatted string. 1.337 is a float. 42 is an int.\n" ] } ], "source": [ "x = 1.337\n", "y = 42\n", "my_formatted_string = f'This a formatted string. {x} is a float. {y} is an int.'\n", "\n", "print(my_formatted_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "DwwMNhvMc9Gq" }, "source": [ "### Strings as Arrays\n", "\n", "Strings in python are a type of *array* - a sequence of stored values - and can be treated as such. Indexing a string variable will return the single character at that index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IZgsJ0pAc9Gq", "outputId": "aa1eb6bd-9f7f-4f2a-bbea-efcaa489abdb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is a string about applesauce.\n", "s\n" ] } ], "source": [ "# Full string\n", "print(my_string)\n", "\n", "# Character at position 4 in the string\n", "print(my_string[3])" ] }, { "cell_type": "markdown", "metadata": { "id": "LKQn8qlLc9Gq" }, "source": [ "Furthermore, we can subset a string by using a range for an index, and returning a *slice*. For example, if we just want the characters of the word applesauce, that would be characters 24 to 33:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MZWC1GK7c9Gq", "outputId": "f0f76f6d-704d-42df-e6e7-67880a949064" }, "outputs": [ { "data": { "text/plain": [ "'applesauce'" ] }, "execution_count": 78, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_string[23:33]" ] }, { "cell_type": "markdown", "metadata": { "id": "DlztC_fNc9Gq" }, "source": [ "
\n", " ⚠ Indexing Quirks in Python ⚠ \n", " \n", "Note that python indexing is [zero-based](https://en.wikipedia.org/wiki/Zero-based_numbering), that is, the first element in an array in python is at index 0, **not** index 1.\n", " \n", "Furthermore, indexing in python in *inclusive* on the low side, but *exclusive* on the high side. That is, a slice created with index range `[4:9]` would be from the fifth character (since indexing is zero-based) up to, but not including the last character at index 9 (the tenth character). So `[4:9]` would be the five characters from the fifth character to the ninth (5 characters).\n", "\n", "Confusing, I know. This trips up even experienced python users.\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "E80rGTb5c9Gq" }, "source": [ "Notice that index is *inclusive* of the first value, but *exclusive* of the last. Here, `my_string[23:33]` is characters 24 (since python is zero-indexed) to 33 (index 32). This does take some getting used to.\n", "\n", "When indexing, the first or last value can be omitted to index from the beginning or to the end, respectively:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0Ir1IamEc9Gq", "outputId": "6d819f5f-3348-4b59-d107-9be4017a9f47" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is a string\n" ] } ], "source": [ "# Indexing from the beginning (omit first index)\n", "# First 16 characters\n", "print(my_string[:16])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lnVQLapCc9Gq", "outputId": "fb4a30b9-bfb0-42f2-c212-f5613f482b7c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " about applesauce.\n" ] } ], "source": [ "# Indexing to the end (omit last index)\n", "# Characters 17 to end\n", "print(my_string[16:])" ] }, { "cell_type": "markdown", "metadata": { "id": "csoD9f1vc9Gq" }, "source": [ "We can find the length of any string (and other objects) in python using the built-in base function `len`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bD5UuF-ic9Gq", "outputId": "274ec698-1165-446d-b4ef-d078fd9076c4" }, "outputs": [ { "data": { "text/plain": [ "34" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(my_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "sAd02hAtc9Gq" }, "source": [ "### Other Arrays in Python" ] }, { "cell_type": "markdown", "metadata": { "id": "Gz3iAf-Qc9Gq" }, "source": [ "An array is a type of data structure which is common across many different languages; in python, there are several different types of arrays including [lists](https://docs.python.org/3/library/stdtypes.html#list), [tuples](https://docs.python.org/3/library/stdtypes.html#tuple), [dicts](https://docs.python.org/3/library/stdtypes.html#dict), and [sets](https://docs.python.org/3/library/stdtypes.html#set).\n", "\n", "We can create any list and subset different indices or slices of it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "executionInfo": { "elapsed": 11, "status": "ok", "timestamp": 1721612017309, "user": { "displayName": "Myles Harrison", "userId": "13636460506782883737" }, "user_tz": 240 }, "id": "p2i7VPVKEftm", "outputId": "489e3423-81c0-4164-ddd7-e12e7d3a542c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\n", "['a', 111]\n" ] } ], "source": [ "mylist = ['a', 111, False, 0, 55.4, 'applesauce']\n", "\n", "# single element\n", "print(mylist[3])\n", "\n", "# slice\n", "print(mylist[0:2])" ] }, { "cell_type": "markdown", "metadata": { "id": "OXDp2F_jErwH" }, "source": [ "Python treats strings just as arrays of single characters, so subsetting strings is an equivalent operation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "executionInfo": { "elapsed": 157, "status": "ok", "timestamp": 1721612106464, "user": { "displayName": "Myles Harrison", "userId": "13636460506782883737" }, "user_tz": 240 }, "id": "TcJpIiJUErZX", "outputId": "9b128aa6-f098-4580-8005-aa3249038662" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ 
"r\n", "Spain\n" ] } ], "source": [ "mystring = \"The rain in Spain\"\n", "\n", "print(mystring[4])\n", "\n", "print(mystring[12:17])" ] }, { "cell_type": "markdown", "metadata": { "id": "h7FiFxQmc9Gr" }, "source": [ "### String Methods" ] }, { "cell_type": "markdown", "metadata": { "id": "IutwG55Dc9Gr" }, "source": [ "Every string variable in Python is not a primitive (such as in languages like C), but actually an object of the string class. They contain methods for common text-based operations that are very straightforward to use. For example, we can change text case using `.upper`, `.lower`, and `title`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IK9Ztt6Lc9Gr", "outputId": "51239d5f-6183-41b1-a4a4-15cc325dd0ac" }, "outputs": [ { "data": { "text/plain": [ "'THIS IS A STRING ABOUT APPLESAUCE.'" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Upper case\n", "my_string.upper()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rA3uPrELc9Gr", "outputId": "80a8afba-89c0-4536-f507-ba74c8c40277" }, "outputs": [ { "data": { "text/plain": [ "'this is a string about applesauce.'" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Lower case\n", "my_string.lower()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "09dTwmxTc9Gr", "outputId": "ce72719b-f69f-4a80-af39-3918fcb0b646" }, "outputs": [ { "data": { "text/plain": [ "'This Is A String About Applesauce.'" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Title case\n", "my_string.title()" ] }, { "cell_type": "markdown", "metadata": { "id": "0F41NumZc9Gr" }, "source": [ "We can also replace every occurrence of a substring within a given string with another substring, using the `.replace` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Q3b93hdVc9Gu", "outputId": "44490afa-0b51-493c-c56b-a31279cccb1f" }, "outputs": [ { "data": { "text/plain": [ "'Th_EYE_s _EYE_s a str_EYE_ng about applesauce.'" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_string.replace(\"i\", \"_EYE_\")" ] }, { "cell_type": "markdown", "metadata": { "id": "TgCgy5kWc9Gu" }, "source": [ "To search for substrings within a given string, we can use the `find` method. 
This will return the index of the first character of the first substring match:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4InKMvqVc9Gu", "outputId": "67b82f76-87f4-41c2-8969-c3f974d557be" }, "outputs": [ { "data": { "text/plain": [ "23" ] }, "execution_count": 64, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_string.find(\"applesauce\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Weomppxjc9Gu" }, "source": [ "This can then be used in combination with indexing as we saw before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TR10uYhac9Gu", "outputId": "9cb3c166-ade1-4f5d-8bf6-c9638981da10" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "23\n" ] }, { "data": { "text/plain": [ "'applesauce.'" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "applesauce_index = my_string.find(\"applesauce\")\n", "\n", "print(applesauce_index)\n", "\n", "my_string[applesauce_index:]" ] }, { "cell_type": "markdown", "metadata": { "id": "iK_am_xRc9Gu" }, "source": [ "### The Most Useful String Methods" ] }, { "cell_type": "markdown", "metadata": { "id": "zY5k1VrAc9Gu" }, "source": [ "By far, the most commonly used and useful string method is `.replace()` for replacing substrings:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LUqkiKdAc9Gu", "outputId": "75e78919-fc73-4996-f874-b888cece688f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "I love applesauace, it's the best sauce! That's what I thought.\n", "I love @pples@uace, it's the best sauce! That's what I think.\n", "I love applesauace, it's the best sauce! That's what I think.\n" ] } ], "source": [ "unclean_text = \"I love @pples@uace, it's the best sauce! That's what I thought.\"\n", "\n", "# Replacing a single character\n", "print(unclean_text.replace('@', 'a'))\n", "\n", "# Replacing a substring\n", "print(unclean_text.replace('thought', 'think'))\n", "\n", "# Since string methods return another string, we can 'chain' methods\n", "print(unclean_text.replace('@','a').replace('thought', 'think'))" ] }, { "cell_type": "markdown", "metadata": { "id": "c8V1gic8c9Gv" }, "source": [ "A close second is that of `.split()`, which uses a specified character as a delimited, and returns an array of substring split by that character (a list). For example, if we have a comma-separated list that we read from a CSV:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xtyaz9kxc9Gv", "outputId": "3cc88695-8143-4347-fd66-79609aafb746" }, "outputs": [ { "data": { "text/plain": [ "['apple', ' banana', ' cherry', ' mango']" ] }, "execution_count": 96, "metadata": {}, "output_type": "execute_result" } ], "source": [ "row = \"apple, banana, cherry, mango\"\n", "\n", "row.split(\",\")" ] }, { "cell_type": "markdown", "metadata": { "id": "n5ytyP1mc9Gv" }, "source": [ "Conversely, we can rejoin a list of substrings together around a delimiting character using `.join`. 
Somewhat unintuitively, this method acts *on* the joining character and takes the list of substrings as input, so the joining character is specified first:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8wzPrMJic9Gv", "outputId": "d9d582e4-da1a-4878-85cb-9dee7cb52012" }, "outputs": [ { "data": { "text/plain": [ "'2023-09-01'" ] }, "execution_count": 100, "metadata": {}, "output_type": "execute_result" } ], "source": [ "substrings = ['2023', '09', '01']\n", "\"-\".join(substrings)" ] }, { "cell_type": "markdown", "metadata": { "id": "AcS8FWm3c9Gv" }, "source": [ "Knowing all these string methods is useful for preprocessing and cleaning text data, for example, for normalizing text by making it all lowercase and removing punctuation. In this case, we replace the characters we wish to remove with the empty string, `''`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Li13DboPc9Gv" }, "outputs": [], "source": [ "sample_text = \\\n", "\"\"\"\n", "This is a block of text with capitalization and also punctuation, including a semi-colon; oh, and a period.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Mh5Gj4zrc9Gv", "outputId": "9553878b-bbe1-492b-adb2-d54018ffb014" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "this is a block of text with capitalization and also punctuation including a semi-colon oh and a period\n", "\n" ] } ], "source": [ "print(sample_text.lower().replace('.','').replace(',','').replace(';',''))" ] }, { "cell_type": "markdown", "metadata": { "id": "TWMiMhrKc9Gv" }, "source": [ "## Regular Expressions *️⃣⁉" ] }, { "cell_type": "markdown", "metadata": { "id": "PJdXfD62c9Gv" }, "source": [ "Regular expressions, often abbreviated as regex (*\"reh·jeks\"*), are a general computer science construct for doing advanced pattern matching. Regular expressions are a kind of language for flexibly describing patterns using different character classes and modifiers. They provide a way to define complex search patterns in text for search or find-and-replace type operations.\n", "\n", "In Python, regular expression functions are part of the base `re` module. Though it is part of base python, we do still need to import it in order to use it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TueSZZ9hc9Gv" }, "outputs": [], "source": [ "import re" ] }, { "cell_type": "markdown", "metadata": { "id": "M41FtocAc9Gw" }, "source": [ "Now that we have `re` at our disposal, let's build our first regular expression. We want to match any digit in a string. In regex, this can be written as `[0-9]`, which is interpreted as any digit from 0 to 9. Furthermore, we need to specify how many occurrences should be expected in the match. There are a few ways to do this in regex using *quantifiers*:\n", "- `?` means zero or one match\n", "- `*` means zero or more matches\n", "- `+` means one or more matches\n", "- `{n}` matches exactly *n* times.\n", "- `{n,}` matches at least *n* times.\n", "- `{n,m}` matches at least *n* and at most *m* times." ] }, { "cell_type": "markdown", "metadata": { "id": "NPft5pc-c9Gw" }, "source": [ "In addition, there are multiple functions for working with regular expressions using `re`:\n", "- `re.match`: is used to check if a pattern matches at the beginning of a string. It returns a match object if the pattern is found at the beginning of the string; otherwise, it returns `None`.
This is usually used when you just want to check if a string starts with a specific pattern.\n", "- `re.search`: is used to search for a pattern anywhere in a given string. It returns a match object if the pattern is found anywhere in the string; otherwise, it returns `None`. This is useful when you want to find the first occurrence of a pattern within a string.\n", "- `re.findall`: is used to find all occurrences of a pattern in a string. It returns a list of all matches found in the string. This is most useful for extracting all occurrences of a pattern from a string.\n", "- `re.sub`: is used to find and replace patterns in a string with a specified replacement string. It returns a new string where all occurrences of the pattern in the original string are replaced with the specified replacement string. Unlike using `.replace` with a regular string, here we can also replace substrings specified using regular expressions, so this is much more flexible and powerful." ] }, { "cell_type": "markdown", "metadata": { "id": "f8nnFYNSc9Gw" }, "source": [ "Let's try it out! First we will search for a phone number: 3 digits, followed by a dash, followed by 4 digits, specified by the regex pattern `[0-9]{3}-[0-9]{4}`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ECvAX80Tc9Gw", "outputId": "d7ce38d5-1398-4791-b1e8-228bd191192c" }, "outputs": [ { "data": { "text/plain": [ "<re.Match object; span=(44, 52), match='867-5309'>" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my_string = \"Here is a string with a phone number in it: 867-5309\"\n", "\n", "# Prefix with 'r' for regex pattern\n", "regex_pattern = r\"[0-9]{3}-[0-9]{4}\"\n", "\n", "re.search(regex_pattern, my_string)" ] }, { "cell_type": "markdown", "metadata": { "id": "iRo4shrAc9Gw" }, "source": [ "As we can see, `re` returns `Match` objects, which include the indices of the match. If we wanted to pull out the substring, we can use the span values contained therein:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "RBx1qY_Xc9Gw", "outputId": "827af5a7-82e9-422b-fb47-81b644d3e3c9" }, "outputs": [ { "data": { "text/plain": [ "(44, 52)" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "match = re.search(regex_pattern, my_string)\n", "match.span()" ] }, { "cell_type": "markdown", "metadata": { "id": "gmcdacxbc9Gw" }, "source": [ "And we can now use this to subset the original string and only pull out the phone number:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "GgCL--G-c9Gw", "outputId": "26a9a5fd-df49-46fe-d3fe-fb8466907f18" }, "outputs": [ { "data": { "text/plain": [ "'867-5309'" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Pull out the start and end indices\n", "start_index, end_index = match.span()\n", "\n", "# Substring\n", "my_string[match.span()[0]:match.span()[1]]" ] }
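, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick aside, the `Match` object also has a `.group()` method, which returns the matched substring directly, so the explicit slicing above can be skipped; a minimal sketch using the `match` object from before:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# .group(0) (or just .group()) returns the entire matched substring directly\n", "match.group(0)" ] }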
, { "cell_type": "markdown", "metadata": { "id": "zZ5NENhLc9Gw" }, "source": [ "Now let's do a more complicated example, where we pull out all the prices from a piece of copy. Here, we are interested in extracting values with a `$`, followed by one or more digits `[0-9]`, followed by a period `.`, followed by exactly 2 digits `[0-9]`.\n", "\n", "Hence, our new regex pattern should be:\n", "- The dollar sign `$`. This is a special character, so we need to \"escape\" it by prepending it with a backslash: `\\$`.\n", "- One or more of the digits 0-9: `[0-9]+`\n", "- Followed by a period `.`\n", "- Exactly two digits `[0-9]{2}`\n", "\n", "So our final regex pattern is `\\$[0-9]+.[0-9]{2}`. (Strictly speaking, the period should also be escaped as `\\.`, since an unescaped `.` matches any single character, but it works for this example.) Let's try it out:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "patG4qfFc9Gw" }, "outputs": [], "source": [ "website_copy = \\\n", "\"\"\"\n", "Discover an incredible range of products at unbeatable prices on our retail website.\n", "Whether you're looking for budget-friendly bargains or premium selections, we've got you covered.\n", "Dive into our collection, where you can find fantastic deals like a set of stylish $0.99 smartphone accessories to elevate your tech game.\n", "If you're in the market for high-quality home appliances, explore our kitchen section, where you can find appliances ranging from $199.99\n", "for a sleek microwave oven to a luxurious $1299.99 espresso machine for coffee connoisseurs.\n", "We offer a diverse selection of products to suit every budget, making shopping with us a delightful experience for all.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "KsFxnonNc9Gw", "lines_to_next_cell": 2, "outputId": "77c0ab4f-e962-4b35-def1-b6a7b6010c6b" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Matches: ['$0.99', '$199.99', '$1299.99']\n" ] } ], "source": [ "# From above\n", "regex_pattern = r\"\\$[0-9]+.[0-9]{2}\"\n", "matches = re.findall(regex_pattern, website_copy)\n", "print(\"Matches:\", matches)" ] }, { "cell_type": "markdown", "metadata": { "id": "6IgMVr-0c9Gw" }, "source": [ "We can see above that we've successfully pulled out the different dollar values from the website copy. However, if we had more complicated and varied expressions, we'd need to work to make sure our regex captures all the different possible variations. For example, in the text below, if we include prices with comma delimiters, our current regex will not catch them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8tfURvp_c9Gw" }, "outputs": [], "source": [ "website_copy_2 = \\\n", "\"\"\"\n", "Fall in love with our delightful assortment of stuffed animals, where cuddly companions come in all price ranges.\n", "For those seeking an adorable plush friend without breaking the bank, we offer charming options like a lovable teddy bear for just $9.99.\n", "Looking to make a grand gesture or celebrate a special occasion?
Our premium collection includes exquisite, handcrafted stuffed animals, such as a\n", "majestic life-sized lion for $599.99 or a whimsical unicorn adorned with Swarovski crystals for a lavish $1,199.99.\n", "This could also be written as $1199.99.\n", "Whether you're on a budget or ready to splurge, our stuffed animal selection promises to bring joy and comfort to your life.\n", "Shop now and find the perfect plush partner for every occasion.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0TuCHgtPc9Gx", "outputId": "a426d8fc-039d-4e18-980d-ce0b53f85879" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Matches: ['$9.99', '$599.99', '$1,19']\n" ] } ], "source": [ "# From above\n", "regex_pattern = r\"\\$[0-9]+.[0-9]{2}\"\n", "matches = re.findall(regex_pattern, website_copy_2)\n", "print(\"Matches:\", matches)" ] }, { "cell_type": "markdown", "metadata": { "id": "hOxTPz-lc9Gx" }, "source": [ "We can see that it failed to capture the final `$1,199.99` price correctly. We'd need to write a more flexible expression which optionally includes the comma delimiter.\n", "\n", "We can update our pattern:\n", "- The dollar sign `$`. This is a special character, so we need to \"escape\" it by prepending it with a backslash: `\\$`.\n", "- Zero or more of the digits 0-9: `[0-9]*`\n", "- Followed by an optional (zero or one) comma: `\\,?`. The comma is escaped with a backslash here for safety, though strictly it only has special meaning inside curly braces such as `{n,m}`.\n", "- Then our original pattern again: one or more digits, followed by a period (escaped this time as `\\.`), followed by exactly two digits: `[0-9]+\\.[0-9]{2}`\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "aYGVQ46Kc9Gx", "outputId": "896968ae-fed4-4f68-a222-142c424fbc85" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Matches: ['$9.99', '$599.99', '$1,199.99', '$1199.99']\n" ] } ], "source": [ "new_regex_pattern = r\"\\$[0-9]*\\,?[0-9]+\\.[0-9]{2}\"\n", "matches = re.findall(new_regex_pattern, website_copy_2)\n", "print(\"Matches:\", matches)" ] }, { "cell_type": "markdown", "metadata": { "id": "pDa7qn3nc9Gx" }, "source": [ "Note that this regex is still not \"perfect\" and will capture expressions we may not want, such as if a comma appeared without preceding digits:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "TIHWLtQdc9Gx" }, "outputs": [], "source": [ "website_copy_3 = \\\n", "\"\"\"\n", "I love writing website copy, here is an incorrectly entered price: $,245.22\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IPzzX8S-c9Gx", "outputId": "d1d187b4-7896-4984-f744-9440089c469a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Matches: ['$,245.22']\n" ] } ], "source": [ "new_regex_pattern = r\"\\$[0-9]*\\,?[0-9]+\\.[0-9]{2}\"\n", "matches = re.findall(new_regex_pattern, website_copy_3)\n", "print(\"Matches:\", matches)" ] }, { "cell_type": "markdown", "metadata": { "id": "by39IdW-c9Gx" }, "source": [ "Writing highly specific regular expressions to capture exactly what is wanted or needed without any edge cases is a topic all its own. For example, no standard regular expression for capturing domain names exists, [given the variability thereof](https://www.oreilly.com/library/view/regular-expressions-cookbook/9781449327453/ch08s15.html).\n", "\n", "In general, writing regex which are \"good enough\" to capture what is needed, and determining what this comprises, is part of the work."
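, "\n", "\n", "One of the `re` functions listed above, `re.sub`, was not used in the examples. As a quick aside, here is a minimal sketch of applying it to the copy we just matched against, masking every price with the (purely illustrative) replacement string `'$X.XX'`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Find-and-replace with a regex: mask every price in the copy\n", "masked_copy = re.sub(new_regex_pattern, '$X.XX', website_copy_2)\n", "\n", "# Every span matched by the pattern is replaced; the rest of the text is untouched\n", "print(masked_copy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unlike `.replace`, which only swaps exact substrings, `re.sub` replaces anything the pattern matches, which is what makes it so useful for cleaning text."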
] }, { "cell_type": "markdown", "metadata": { "id": "kedVCR0ic9Gx" }, "source": [ "### Activity: Prompting ChatGPT for Regex" ] }, { "cell_type": "markdown", "metadata": { "id": "OB32crWoc9Gx" }, "source": [ "One thing that ChatGPT (and other LLMs) are particularly good at is writing regular expressions! Let's give [ChatGPT](https://www.chatgpt.com) some prompts and see if it can come up with the correct regex for the following:\n", "- Phone numbers\n", "- Email addresses\n", "- ISO Date Formats\n", "\n", "You can test these out using [regex101](https://regex101.com/) afterward. How did it perform?" ] }, { "cell_type": "markdown", "metadata": { "id": "4PsnECdUc9Gx" }, "source": [ "## Intro to Pandas 🐼\n", "\n", "While a lot can be accomplished in base python, most of the base python data structures are not suitable for doing serious machine learning and natural language processing work.\n", "\n", "As such, the vast majority of ML work in python is doing using the *data science stack*, or what I refer to as the \"holy trinity of data science\":\n", "- the [numpy](https://numpy.org/) library for doing numerical computation (*i.e.* working with vectors and matrices)\n", "- the [pandas](https://pandas.pydata.org/) library for data manipulation and working with structured data\n", "- the [matplotlib](https://matplotlib.org/) library for visualizing data\n", "\n", "In this section, we will work only with pandas, and as we shall see, it is built on top of numpy and also has matplotlib functionality integrated within it, which makes it possible to visualize data without needing to use the latter directly." ] }, { "cell_type": "markdown", "metadata": { "id": "oMhWCGrSc9Gx" }, "source": [ "### Series and DataFrames\n", "\n", "Pandas works with abstractions that should be familiar to any data practitioner: [Series](https://pandas.pydata.org/docs/reference/api/pandas.Series.html), which correspond to columns of data composed of individual elements, and [DataFrames]( ) which are correspond to the familiar abstraction of tables composed of columns. Therefore, a DataFrame is composed of multiple Series, each making up a single column therein.\n", "\n", "Each Series must store data of one and only one type, therefore each column of a pandas DataFrame must all contain values of the same data type.\n", "\n", "This is all getting a bit abstract, so let's take a look at a simple example in the context of text data.\n", "\n", "Traditionally, pandas is imported as `pd`, and the sub-modules, functions, and classes within it called from within. Let's create a new Series of text data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5pSCF5T2c9Gx", "outputId": "c88a878a-df69-4780-cfe2-366afccdc173" }, "outputs": [ { "data": { "text/plain": [ "0 applesauce\n", "1 beluga caviar\n", "2 cobbler\n", "3 dijon\n", "dtype: object" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import pandas as pd\n", "\n", "# Text data\n", "text_data = ['applesauce', 'beluga caviar', 'cobbler', 'dijon']\n", "\n", "# Plunk into a series\n", "text_series = pd.Series(my_text_data)\n", "\n", "# Show\n", "display(text_series)" ] }, { "cell_type": "markdown", "metadata": { "id": "3veZocffc9Gx" }, "source": [ "We can see that each element in a Series has an associated *index*. This corresponds roughly to the primary key (id) of a table in a database. 
Just as we can reference elements in any array in python using their numeric index (as we saw with subsetting strings), we can also pull out individual elements and slices of a pandas Series using indexing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "47r-_Hbgc9Gx", "outputId": "b3347b45-764e-41fc-f94d-3aba3540ec1f" }, "outputs": [ { "data": { "text/plain": [ "'beluga caviar'" ] }, "execution_count": 103, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Remember that indexes in python start from 0\n", "text_series[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bh23yzxxc9Gx", "outputId": "2f3b65e3-0107-4075-af1a-149bece13e16" }, "outputs": [ { "data": { "text/plain": [ "1 beluga caviar\n", "2 cobbler\n", "dtype: object" ] }, "execution_count": 104, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Slice\n", "text_series[1:3]" ] }, { "cell_type": "markdown", "metadata": { "id": "BT1_MndRc9Gy" }, "source": [ "As we saw with regular indexing in python, it is *inclusive* on the low end and *exclusive* on the high end, so the slice `[1:3]` returns the second and third elements (indices 1 and 2), up to but not including the element at index 3." ] }, { "cell_type": "markdown", "metadata": { "id": "k5ftSqUjc9Gy" }, "source": [ "Indices can be manipulated and need not be numeric (though they usually are). We can replace the index of a given Series:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "BrBeqWkpc9Gy", "outputId": "a17fda63-f2e7-47e0-c700-58122c64978a" }, "outputs": [ { "data": { "text/plain": [ "a applesauce\n", "b beluga caviar\n", "c cobbler\n", "d dijon\n", "dtype: object" ] }, "execution_count": 108, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_series.index = ['a','b','c','d']\n", "text_series" ] }, { "cell_type": "markdown", "metadata": { "id": "VHgevtW5c9Gy" }, "source": [ "We can then reference elements by using the new index:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YRsYUJR_c9Gy", "outputId": "eef9b826-1acc-405d-def8-2016839f6f87" }, "outputs": [ { "data": { "text/plain": [ "'applesauce'" ] }, "execution_count": 114, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_series['a']" ] }, { "cell_type": "markdown", "metadata": { "id": "Y1AS51q3c9Gy" }, "source": [ "Confusingly, when using ranges that are non-numeric, they are *inclusive* on both sides:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iYoaMMQyc9Gy", "outputId": "292af2c6-6147-4076-81be-89641f875695" }, "outputs": [ { "data": { "text/plain": [ "a applesauce\n", "b beluga caviar\n", "c cobbler\n", "dtype: object" ] }, "execution_count": 115, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_series['a':'c']" ] }, { "cell_type": "markdown", "metadata": { "id": "G0RyJ03yc9Gy" }, "source": [ "Despite replacing the default index, we can still use numeric indexing (this is always an option):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IoY49RRSc9Gy", "outputId": "71f832c8-9f3d-4f80-9975-1f099d0970f9" }, "outputs": [ { "data": { "text/plain": [ "a applesauce\n", "b beluga caviar\n", "c cobbler\n", "dtype: object" ] }, "execution_count": 117, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_series[0:3]" ] }
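, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick aside, the base string methods we saw earlier can also be applied to every element of a text Series at once, which is handy for cleaning and filtering; a minimal sketch using the `text_series` from above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Apply a string method to every element of the Series at once\n", "display(text_series.str.upper())\n", "\n", "# Keep only the elements that contain the letter 'c'\n", "display(text_series[text_series.str.contains('c')])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `.str` accessor used here is described in more detail just below, where it is applied to a full column of a DataFrame." ] }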
, { "cell_type": "markdown", "metadata": { "id": "eEp1myf8c9Gy" }, "source": [ "Finally, nearly any serious data science work done in Python will make use of the pandas library. In pandas, we work with [DataFrames](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html), which are like tables in Excel or a database. Pandas has the `.str` accessor, which can efficiently apply any base string method to a column of data element-wise:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VzVTDEOJc9Gy", "outputId": "0305e388-3566-46fc-9038-1e6a1d94e8c5" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Customer_IDProduct
0C123Widget
1C456Gadget
2C789Widget
3C101Doodad
\n", "
" ], "text/plain": [ " Customer_ID Product\n", "0 C123 Widget\n", "1 C456 Gadget\n", "2 C789 Widget\n", "3 C101 Doodad" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Customer_IDProductCustomer_Type
0C123WidgetType 123
1C456GadgetType 456
2C789WidgetType 789
3C101DoodadType 101
\n", "
" ], "text/plain": [ " Customer_ID Product Customer_Type\n", "0 C123 Widget Type 123\n", "1 C456 Gadget Type 456\n", "2 C789 Widget Type 789\n", "3 C101 Doodad Type 101" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import pandas as pd\n", "\n", "# Create a sample DataFrame with customer IDs\n", "data = {'Customer_ID': ['C123', 'C456', 'C789', 'C101'],\n", " 'Product': ['Widget', 'Gadget', 'Widget', 'Doodad']}\n", "text_df = pd.DataFrame(data)\n", "\n", "# Show (before)\n", "display(text_df)\n", "\n", "# Applying the .replace() method using the .str accessor\n", "text_df['Customer_Type'] = text_df['Customer_ID'].str.replace('C', 'Type ')\n", "\n", "# Show (after)\n", "display(text_df)" ] }, { "cell_type": "markdown", "metadata": { "id": "ARip3XRBG-uK" }, "source": [ "# Reading Data with Pandas\n", "\n", "You can read in existing data in many different formats with pandas. Let's use by far the most common method for doing so to read in a CSV file from the NLP from scratch [datasets repo](https://github.com/nlpfromscratch/datasets) on Github:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "executionInfo": { "elapsed": 6483, "status": "ok", "timestamp": 1721612716025, "user": { "displayName": "Myles Harrison", "userId": "13636460506782883737" }, "user_tz": 240 }, "id": "PczfOZjtG-W0", "outputId": "fd49c58e-05a4-4ad2-f42c-ad4525abbbbb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cloning into 'datasets'...\n", "remote: Enumerating objects: 70, done.\u001b[K\n", "remote: Counting objects: 100% (70/70), done.\u001b[K\n", "remote: Compressing objects: 100% (62/62), done.\u001b[K\n", "remote: Total 70 (delta 14), reused 61 (delta 8), pack-reused 0\u001b[K\n", "Receiving objects: 100% (70/70), 34.61 MiB | 18.02 MiB/s, done.\n", "Resolving deltas: 100% (14/14), done.\n", "Updating files: 100% (27/27), done.\n" ] } ], "source": [ "# Clone the repo on local drive of the notebook instance\n", "!git clone https://github.com/nlpfromscratch/datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 206 }, "executionInfo": { "elapsed": 182, "status": "ok", "timestamp": 1721612954762, "user": { "displayName": "Myles Harrison", "userId": "13636460506782883737" }, "user_tz": 240 }, "id": "aOpD2XI2HU3P", "outputId": "25b1be9c-151a-4604-d0c9-8022972912aa" }, "outputs": [ { "data": { "application/vnd.google.colaboratory.intrinsic+json": { "summary": "{\n \"name\": \"emoji_df\",\n \"rows\": 50,\n \"fields\": [\n {\n \"column\": \"text\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 50,\n \"samples\": [\n \"\\ud83e\\udd17\\ud83d\\udcab Spread kindness like confetti and watch the world sparkle around you! \\ud83c\\udf8a\\u2728\",\n \"Don't be afraid to explore the darker corners of life; they reveal hidden treasures! \\ud83d\\udc80\\u2728 Embrace the adventure! \\ud83c\\udf08\\ud83d\\ude80\",\n \"Hey! \\ud83e\\udddb\\u200d\\u2642\\ufe0f Remember, fear is just a ghost that vanishes when you face it! You've got this! \\ud83d\\udc7b\\ud83d\\udcaa\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}", "type": "dataframe", "variable_name": "emoji_df" }, "text/html": [ "\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
text
0🌟 Hey there! Sending you a wave of 🌊 positivit...
1πŸŽ‰ Just wanted to remind you that you're amazin...
2🌻 Rise and shine! It's a new day full of oppor...
3πŸ€ Wishing you luck and a day filled with joy, ...
4πŸ€— Hey, just dropping by to say hi and sending ...
\n", "
\n", "
\n", "\n", "
\n", " \n", "\n", " \n", "\n", " \n", "
\n", "\n", "\n", "
\n", " \n", "\n", "\n", "\n", " \n", "
\n", "\n", "
\n", "
\n" ], "text/plain": [ " text\n", "0 🌟 Hey there! Sending you a wave of 🌊 positivit...\n", "1 πŸŽ‰ Just wanted to remind you that you're amazin...\n", "2 🌻 Rise and shine! It's a new day full of oppor...\n", "3 πŸ€ Wishing you luck and a day filled with joy, ...\n", "4 πŸ€— Hey, just dropping by to say hi and sending ..." ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Read in the file with pandas\n", "emoji_df = pd.read_csv('datasets/emoji_sms/train.csv')\n", "\n", "# Show a sample of the data\n", "emoji_df.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "EvprYSrdHv-O" }, "source": [ "Note that pandas can read csv files (and other types of data) which are hosted online directly by passing the [URL as the filepath](](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 206 }, "executionInfo": { "elapsed": 589, "status": "ok", "timestamp": 1721613001524, "user": { "displayName": "Myles Harrison", "userId": "13636460506782883737" }, "user_tz": 240 }, "id": "v-0AABu0IMFJ", "outputId": "245f45bf-f386-467b-d242-d590eb41385b" }, "outputs": [ { "data": { "application/vnd.google.colaboratory.intrinsic+json": { "summary": "{\n \"name\": \"emoji_df\",\n \"rows\": 50,\n \"fields\": [\n {\n \"column\": \"text\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 50,\n \"samples\": [\n \"\\ud83e\\udd17\\ud83d\\udcab Spread kindness like confetti and watch the world sparkle around you! \\ud83c\\udf8a\\u2728\",\n \"Don't be afraid to explore the darker corners of life; they reveal hidden treasures! \\ud83d\\udc80\\u2728 Embrace the adventure! \\ud83c\\udf08\\ud83d\\ude80\",\n \"Hey! \\ud83e\\udddb\\u200d\\u2642\\ufe0f Remember, fear is just a ghost that vanishes when you face it! You've got this! \\ud83d\\udc7b\\ud83d\\udcaa\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}", "type": "dataframe", "variable_name": "emoji_df" }, "text/html": [ "\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
text
0🌟 Hey there! Sending you a wave of 🌊 positivit...
1πŸŽ‰ Just wanted to remind you that you're amazin...
2🌻 Rise and shine! It's a new day full of oppor...
3πŸ€ Wishing you luck and a day filled with joy, ...
4πŸ€— Hey, just dropping by to say hi and sending ...
\n", "
\n", "
\n", "\n", "
\n", " \n", "\n", " \n", "\n", " \n", "
\n", "\n", "\n", "
\n", " \n", "\n", "\n", "\n", " \n", "
\n", "\n", "
\n", "
\n" ], "text/plain": [ " text\n", "0 🌟 Hey there! Sending you a wave of 🌊 positivit...\n", "1 πŸŽ‰ Just wanted to remind you that you're amazin...\n", "2 🌻 Rise and shine! It's a new day full of oppor...\n", "3 πŸ€ Wishing you luck and a day filled with joy, ...\n", "4 πŸ€— Hey, just dropping by to say hi and sending ..." ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Read directly from url / online source\n", "emoji_df = pd.read_csv('https://raw.githubusercontent.com/nlpfromscratch/datasets/master/emoji_sms/train.csv')\n", "\n", "# Show\n", "emoji_df.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "PiIkj0dLc9Gy" }, "source": [ "## Conclusion\n", "That concludes the workshop! I hope you've enjoyed getting started with the python programming language and natural language processing. We will continue next week with acquriing " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "----\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
Copyright NLP from scratch, 2024.
" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.6" } }, "nbformat": 4, "nbformat_minor": 4 }